83 research outputs found

    Canonical correlation analysis based on sparse penalty and through rank-1 matrix approximation

    Canonical correlation analysis (CCA) is a well-known technique for characterizing the relationship between two sets of multidimensional variables by finding linear combinations of the variables with maximal correlation. Sparse CCA and smooth (regularized) CCA are two widely used variants, the former prized for its improved interpretability and the latter for its better performance. So far, the cross-product matrix of the two sets of variables has been the usual starting point for deriving these variants. In this paper, two new algorithms for sparse CCA and smooth CCA are proposed. They differ from existing ones in their derivation, which is based on a penalized rank-one matrix approximation and on the orthogonal projectors onto the spaces spanned by the columns of the two sets of variables, rather than on the simple cross-product matrix. The performance and effectiveness of the proposed algorithms are assessed in simulated experiments, where they outperform state-of-the-art sparse CCA algorithms
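The penalized rank-one approximation described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' exact algorithm: it forms the product of the two orthogonal projectors and extracts a sparse rank-one pair by alternating soft-thresholded power iterations (the soft-threshold penalty, the normalization details, and all parameter names are assumptions made for illustration):

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of the l1 penalty: shrink entries toward zero
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_rank1_cca(X, Y, lam_u=0.05, lam_v=0.05, n_iter=100, seed=0):
    # Orthogonal projectors onto the column spaces of X and Y,
    # used here instead of the simple cross-product X.T @ Y
    Px = X @ np.linalg.pinv(X)
    Py = Y @ np.linalg.pinv(Y)
    K = Px @ Py
    # Penalized rank-one approximation of K: alternating power steps
    # with soft-thresholding give sparse singular-vector estimates
    u = np.random.default_rng(seed).standard_normal(K.shape[0])
    u /= np.linalg.norm(u)
    v = np.zeros(K.shape[1])
    for _ in range(n_iter):
        v = soft_threshold(K.T @ u, lam_v)
        v /= np.linalg.norm(v) + 1e-12
        u = soft_threshold(K @ v, lam_u)
        u /= np.linalg.norm(u) + 1e-12
    return u, v  # canonical variates in sample space
```

The CCA weight vectors can then be recovered from the variates, e.g. `a = np.linalg.pinv(X) @ u`; the paper's derivation may differ in the penalty and normalization details.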

    Sparse Canonical Correlation Analysis Based on Rank-1 Matrix Approximation and its Application for fMRI Signals

    Canonical correlation analysis (CCA) is a well-known technique for characterizing the relationship between two sets of multidimensional variables by finding linear combinations of the variables with maximal correlation. Sparse CCA and regularized CCA are two widely used variants, the former prized for its improved interpretability and the latter for its better performance. So far, the cross-product matrix of the two sets of variables has been the usual starting point for deriving these variants. In this paper, a new algorithm for sparse CCA is proposed. It differs from existing ones in its derivation, which is based on a penalized rank-one matrix approximation and on the orthogonal projectors onto the spaces spanned by the two sets of variables, rather than on the simple cross-product matrix. The performance and effectiveness of the proposed algorithm are tested in simulated experiments, where it outperforms state-of-the-art sparse CCA algorithms

    Learning radial basis function neural networks with noisy input-output data set

    This paper addresses the problem of learning radial basis function neural networks to approximate a nonlinear L2 function from Rd to R. Hybrid algorithms are most often used for this task: unsupervised learning techniques estimate the centers and widths of the radial functions, while supervised learning techniques estimate the linear parameters. The supervised step generally relies on the least squares (LS) estimator (or criterion). This estimator is optimal when the training set (zi, yi), i = 1, ..., q, consists of noisy outputs yi and exactly known inputs zi. In practice, however, it is seldom possible to avoid noise when measuring the inputs zi. With a noisy input-output training set, the LS estimator yields a biased estimate of the linear parameters, which in turn leads to an erroneous output estimate. This paper proposes an estimation procedure based on the errors-in-variables model to estimate the linear parameters (in the supervised step) when both the inputs and the outputs are corrupted by noise. A geometrical interpretation of the proposed estimation criterion is given to illustrate its advantage over the least squares criterion. The improved performance in nonlinear function approximation is illustrated on a simulation example
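As a concrete illustration of the errors-in-variables idea, the sketch below fits the linear parameters of a Gaussian RBF expansion by total least squares, the classical errors-in-variables estimator; the function names and the Gaussian basis are assumptions for illustration, and the paper's exact criterion may differ:

```python
import numpy as np

def rbf_design(Z, centers, width):
    # Gaussian radial basis activations for inputs Z (n x d)
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def tls_weights(Phi, y):
    # Errors-in-variables (total least squares) estimate of the linear
    # parameters: the right singular vector of [Phi | y] with smallest
    # singular value yields the weights, unlike the ordinary LS solve,
    # which assumes the design matrix Phi is noise-free.
    M = np.column_stack([Phi, y])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    v = Vt[-1]
    return -v[:-1] / v[-1]
```

With noiseless data the TLS solution coincides with ordinary least squares; its advantage appears when the activations themselves are computed from noisy inputs.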

    A model selection approach to signal denoising using Kullback's symmetric divergence

    We consider the determination of a soft/hard coefficient threshold for recovering a signal embedded in additive Gaussian noise, a problem closely related to variable selection in linear regression. Viewing denoising as a model selection problem, we propose a new information-theoretic approach to signal denoising. We first construct a statistical model for the unknown signal and then seek the best approximating model (corresponding to the denoised signal) from a set of candidates. We adopt Kullback's symmetric divergence as the measure of similarity between the unknown model and a candidate model; the best approximating model is the one that minimizes an unbiased estimator of this divergence. The advantage of a denoising method based on model selection over classical thresholding approaches is that the threshold is determined automatically, without the need to estimate the noise variance. The proposed method, called KICc-denoising (Kullback Information Criterion corrected), is compared with cross validation (CV), minimum description length (MDL) and the classical SureShrink and VisuShrink methods in a simulation study based on three different types of signals: chirp, seismic and piecewise polynomial
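The model-selection view of thresholding can be made concrete. The sketch below keeps the k largest-magnitude coefficients for each candidate k and picks the k minimizing a penalized-likelihood score; the generic AIC-style penalty used here is a stand-in for the paper's KICc estimator, and all names are illustrative assumptions:

```python
import numpy as np

def select_hard_threshold(coeffs, penalty=lambda k, n: 2.0 * k):
    # Each candidate model keeps the k largest-magnitude coefficients;
    # score(k) = n * log(RSS_k / n) + penalty(k, n), minimized over k.
    # The noise variance never needs to be estimated explicitly.
    n = coeffs.size
    order = np.argsort(np.abs(coeffs))[::-1]
    sq = np.abs(coeffs[order]) ** 2
    # rss[k] = residual sum of squares when keeping the top-k coefficients
    rss = np.concatenate([[sq.sum()], sq.sum() - np.cumsum(sq)])
    best_k, best_score = 0, np.inf
    for k in range(n):  # k = n is excluded (RSS = 0, log undefined)
        score = n * np.log(max(rss[k], 1e-12) / n) + penalty(k, n)
        if score < best_score:
            best_k, best_score = k, score
    kept = np.zeros_like(coeffs)
    kept[order[:best_k]] = coeffs[order[:best_k]]
    return kept, best_k
```

Swapping in a different `penalty` gives MDL-like or KICc-like behavior without changing the selection loop.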

    Multivariate regression model selection from small samples using Kullback's symmetric divergence

    The Kullback Information Criterion, KIC, and its univariate bias-corrected version, KICc, are two recently developed criteria for model selection. Both criteria can be viewed as estimators of the expected Kullback symmetric divergence, and they have a fixe

    Maximum likelihood blind image restoration via alternating minimization

    A new algorithm for maximum likelihood blind image restoration is presented in this paper. It is obtained by modeling the original image and the additive noise as multivariate Gaussian processes with unknown covariance matrices. The blurring process is specified by its point spread function, which is also unknown. Estimates of the original image and of the blur are derived by alternating minimization of the Kullback-Leibler divergence. The algorithm has the advantage of providing closed-form expressions for the parameters to be updated and of converging after only a few iterations. A simulation example illustrating the effectiveness of the proposed algorithm is presented
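The structure of such an alternating scheme, where each half-step has a closed-form solution, can be illustrated on a toy 1-D blind deconvolution problem. This is a schematic stand-in, not the paper's algorithm: the KL-divergence objective and the Gaussian covariance estimation are replaced by plain least squares half-steps:

```python
import numpy as np

def blind_deconv_als(y, x0, h_len=3, n_iter=20):
    # Alternating minimization for y ≈ h * x (1-D blind deconvolution):
    # with x fixed the model is linear in h, and vice versa, so each
    # half-step is a closed-form least squares solve, mirroring the
    # "closed-form updates" structure described in the abstract.
    x = x0.astype(float).copy()
    h = None
    for _ in range(n_iter):
        # conv(x, h) is linear in h: column i of X is conv(x, e_i)
        X = np.column_stack([np.convolve(x, e)[:y.size] for e in np.eye(h_len)])
        h = np.linalg.lstsq(X, y, rcond=None)[0]
        # conv(h, x) is linear in x: column j of H is conv(e_j, h)
        H = np.column_stack([np.convolve(e, h)[:y.size] for e in np.eye(x.size)])
        x = np.linalg.lstsq(H, y, rcond=None)[0]
    return h, x
```

Like most bilinear problems, this one has a scale ambiguity (h, x) ↔ (c·h, x/c), so a good initialization or normalization matters in practice.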

    An iterative projections algorithm for ML factor analysis

    Alternating minimization of the information divergence is used to derive an effective algorithm for maximum likelihood (ML) factor analysis. The proposed algorithm is derived as an iterative alternating-projections procedure between a model family of probability distributions defined by the factor analysis model and a desired family of probability distributions constrained to be concentrated on the observed data. The algorithm has the advantage of being simple to implement and stable in its convergence. A simulation example illustrating the effectiveness of the proposed algorithm for ML factor analysis is presented
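The alternating-projections view is closely related to the classical EM algorithm for factor analysis. As a point of comparison (not the paper's algorithm), a compact sketch of the textbook EM updates for fitting a sample covariance S with the model S ≈ L Lᵀ + Ψ, Ψ diagonal:

```python
import numpy as np

def fa_em(S, k, n_iter=200, seed=0):
    # Classical EM for the factor analysis model S ≈ L @ L.T + diag(psi).
    # Schematic comparison point; the paper derives a related scheme as
    # alternating information projections rather than textbook EM.
    p = S.shape[0]
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((p, k))
    psi = np.ones(p)
    for _ in range(n_iter):
        # E-step: posterior regression of factors on observations
        B = L.T @ np.linalg.inv(L @ L.T + np.diag(psi))   # k x p
        Ez_cov = np.eye(k) - B @ L + B @ S @ B.T          # E[z z^T | x]
        # M-step: closed-form updates of loadings and uniquenesses
        L = S @ B.T @ np.linalg.inv(Ez_cov)
        psi = np.diag(S - L @ B @ S)
    return L, psi
```

Both EM and the alternating-projections derivation produce simple, monotonically improving updates, which is the stability property the abstract highlights.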